677 research outputs found
Deep Learning for Automated Medical Image Analysis
Medical imaging is an essential tool in many areas of medicine, used for both
diagnosis and treatment. However, reading medical images and making diagnosis
or treatment recommendations requires specially trained medical specialists.
The current practice of reading medical images is labor-intensive,
time-consuming, costly, and error-prone. A computer-aided system that can
automatically make diagnosis and treatment recommendations would therefore be
desirable. Recent advances in deep learning let us rethink how clinicians
diagnose from medical images. In this thesis, we study 1) mammograms for
detecting breast cancer, the most frequently diagnosed solid cancer in U.S.
women; 2) lung CT images for detecting lung cancer, the most frequently
diagnosed malignant cancer; and 3) head and neck CT images for automated
delineation of organs at risk in radiotherapy. First, we show how to employ an
adversarial scheme to generate hard examples that improve mammogram mass
segmentation. Second, we demonstrate how to use weakly labeled data for
mammogram breast cancer diagnosis by efficiently designing a deep network for
multi-instance learning. Third, the thesis walks through the DeepLung system,
which combines deep 3D ConvNets and gradient boosting machines (GBM) for
automated lung nodule detection and classification. Fourth, we show how to use
weakly labeled data to improve an existing lung nodule detection system by
integrating deep learning with a probabilistic graphical model. Lastly, we
demonstrate AnatomyNet, which is thousands of times faster and more accurate
than previous methods at automated anatomy segmentation.
Comment: PhD Thesis
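The second contribution above, diagnosis from weakly labeled mammograms via multi-instance learning, rests on aggregating patch-level malignancy scores into a single image-level prediction. Below is a minimal sketch of one standard pooling choice (noisy-OR); the patch scores and function name are illustrative placeholders, not the thesis's actual model:

```python
import numpy as np

def mil_bag_probability(instance_probs):
    """Aggregate per-patch malignancy probabilities into a bag-level
    (whole-mammogram) score under the standard MIL assumption: a bag is
    positive if at least one instance is positive.
    P(bag positive) = 1 - prod_i(1 - p_i)  (noisy-OR pooling)."""
    instance_probs = np.asarray(instance_probs, dtype=float)
    return 1.0 - np.prod(1.0 - instance_probs)

# A mammogram split into patches, each scored by some patch classifier
# (the scores here are made up for illustration).
patch_scores = [0.02, 0.05, 0.9, 0.01]
print(round(mil_bag_probability(patch_scores), 4))  # → 0.9078
```

One suspicious patch dominates the bag score even when most patches look benign, which is exactly the behavior wanted when only image-level labels are available.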
Adversarial Deep Structured Nets for Mass Segmentation from Mammograms
Mass segmentation provides effective morphological features that are important
for mass diagnosis. In this work, we propose a novel end-to-end network for
mammographic mass segmentation that employs a fully convolutional network
(FCN) to model a potential function, followed by a CRF to perform structured
learning. Because the mass distribution varies greatly with pixel position,
the FCN is combined with a position prior. Further, we employ adversarial
training to mitigate over-fitting due to the small sizes of mammogram
datasets. A multi-scale FCN is employed to improve the segmentation
performance. Experimental results on two public datasets, INbreast and
DDSM-BCRP, demonstrate that our end-to-end network achieves better performance
than state-of-the-art approaches.
\footnote{https://github.com/wentaozhu/adversarial-deep-structural-networks.git}
Comment: Accepted by ISBI 2018. arXiv admin note: substantial text overlap with
arXiv:1612.0597
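The FCN-plus-CRF formulation above can be made concrete by writing down the energy a CRF assigns to a candidate segmentation. The sketch below scores a binary label map with unary costs (e.g. negative log-probabilities from an FCN) plus a Potts pairwise term over 4-connected neighbors; it is a simplified stand-in for illustration, not the paper's exact model:

```python
import numpy as np

def crf_energy(labels, unary, pairwise_weight=1.0):
    """Energy of a segmentation under a simple grid CRF.
    labels:  (h, w) integer label map.
    unary:   (h, w, n_labels) cost of assigning each label at each pixel
             (e.g. negative log-probabilities from an FCN).
    The pairwise Potts term adds `pairwise_weight` for every pair of
    4-connected neighbors that disagree, encouraging smooth masks."""
    h, w = labels.shape
    # Unary term: cost of each pixel's assigned label.
    e = unary[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # Pairwise term: count disagreements between vertical and
    # horizontal neighbors.
    e += pairwise_weight * (labels[1:, :] != labels[:-1, :]).sum()
    e += pairwise_weight * (labels[:, 1:] != labels[:, :-1]).sum()
    return float(e)
```

Structured inference then amounts to searching for the label map minimizing this energy; the paper's end-to-end training instead backpropagates through the CRF, but the energy being minimized has this same unary-plus-pairwise shape.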
Automated Lensing Learner: Automated Strong Lensing Identification with a Computer Vision Technique
Forthcoming surveys such as the Large Synoptic Survey Telescope (LSST) and
Euclid necessitate automatic and efficient methods for identifying strong
lensing systems. We present a strong lensing identification approach that
utilizes a feature extraction method from computer vision, the Histogram of
Oriented Gradients (HOG), to capture edge patterns of arcs. We train a
supervised classifier model on the HOG of mock strong galaxy-galaxy lens images
similar to observations from the Hubble Space Telescope (HST) and LSST. We
assess model performance with the area under the curve (AUC) of a Receiver
Operating Characteristic (ROC) curve. Models trained on 10,000 lens- and
non-lens-containing images exhibit an AUC of 0.975 for an HST-like sample,
0.625 for one exposure of LSST, and 0.809 for 10-year mock LSST observations.
Performance appears to improve continually with the training set size. Models
trained on fewer images perform better in the absence of the lens galaxy
light. However, with larger training data sets, information from the lens
galaxy actually improves model performance, indicating that HOG captures much
of the morphological complexity of the arc-finding problem. We test our
classifier on data from the Sloan Lens ACS Survey and find that small-scale
image features reduce the efficiency of our trained model. However, these
preliminary tests indicate that some parameterizations of HOG can compensate
for differences between observed and mock data. One example best-case
parameterization results in an AUC of 0.6 in the F814 filter image, with other
parameterizations equivalent to random performance.
Comment: 18 pages, 14 figures, summarizing results in figure
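As a rough illustration of the HOG features used above, the sketch below computes a single gradient-orientation histogram for a grayscale image array. A full HOG implementation (e.g. Dalal–Triggs) additionally divides the image into cells, groups cells into overlapping blocks, and normalizes per block; this minimal version keeps only the core idea of magnitude-weighted orientation binning:

```python
import numpy as np

def hog_descriptor(image, n_bins=9):
    """Minimal HOG-style sketch: one global orientation histogram,
    weighted by gradient magnitude, over unsigned orientations in
    [0, 180) degrees. Expects a 2-D grayscale array."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned gradient orientation, as in standard HOG.
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(0.0, 180.0), weights=magnitude)
    # L2-normalize so the descriptor is contrast-invariant.
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A vertical edge produces horizontal gradients (orientation ~0 deg),
# so the first bin dominates the descriptor.
edge = np.zeros((8, 8))
edge[:, 4:] = 1.0
descriptor = hog_descriptor(edge)
```

Descriptors like this (computed per cell and concatenated in the full version) are what the supervised classifier is trained on, so arc-like edge patterns around a lens map to a distinctive orientation signature.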